
my thoughts on the xz backdoor

2024-03-31 by: flushy@flushy.net
From: flushy@flushy.net
------------------------------------------------------

gist version of this is here:

https://gist.github.com/gonoph/c41630716d594e61a69477760ac045ae

This is written in Markdown format.

# XZ Backdoor summary and thoughts. TL;DR:

This is terrifying. Basically, someone infiltrated the xz upstream, then started committing code that injected a backdoor into the xz library via the test suite, using specially crafted rogue compressed test files and obfuscated shell code. The end result is that sshd, when patched to work with systemd (to notify systemd it's ready), would link to liblzma and execute the backdoor upon login attempts.

# Longer version:

The technical aptitude is overshadowed by the [social engineering and breach of trust][1] or [bribe or blackmail of a legitimate user (AV-601)][2].

[1]: https://attack.mitre.org/techniques/T1199/
[2]: https://sap.github.io/risk-explorer-for-software-supply-chains/#/attacktree?av=AV-601

So some of the evidence for this is "locked up" as GitHub suspended both maintainers of xz and the project. There is a mirror of xz on the project website, but at this point "[everything is suspect][3]" until proven otherwise.

[3]: https://news.ycombinator.com/item?id=39870098

There is also a [FAQ put together by Sam James][4], and a [response by the other maintainer, Lasse Collin][5].

[4]: https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
[5]: https://tukaani.org/xz-backdoor/

## Background

[Andres Freund posted his story on oss-security][6]. Basically, he noticed on his Debian system that the ssh server had some odd timing issues after upgrading lzma/xz-utils. He started poking around and found some odd tracing behavior, and noticed some odd calls being made to libraries: (a) WHEN the calls were made - aka sshd should not be making those calls to that library at that time, (b) WHERE in the library the calls were happening, and (c) it was consuming a lot of cpu inside the library and inside sshd.

[6]: https://www.openwall.com/lists/oss-security/2024/03/29/4

After some investigation, he learned that there was a backdoor implanted in liblzma.so at BUILD TIME, and crucially it was implanted (a) in PLAIN SIGHT, (b) in FOUR PHASES, and (c) via one of the PROJECT ADMINS.

### Implanted in PLAIN SIGHT

Jia Tan (@JiaT75) committed changes to xz and released two versions of xz and xz-utils - one in Jan of 2024, and the other in Feb of 2024. Then, as is common, he crafted the corresponding "tarball" archives to go with the releases - and signed them with his private GPG key.

@JiaT75 also [committed to several other projects][7], and started a campaign to encourage maintainers of other distros to use the newly released version of xz - but stressed using the release archive and not the GitHub source.

[7]: https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27#other-projects

Phase 1: Starting in January (at least what we know about THIS BACKDOOR - there could be more), the author, Jia Tan (@JiaT75), committed several changes under the guise of adding compressed files (blobs) to be used in the test harness of the lzma library. This is very common, and not unusual for something like a compression library.

Phase 2: In February, @JiaT75 started the next phase: backdoor activation. The test files uploaded to the source repository in January were actually obfuscated shell and C code. To activate the backdoor, Jia added some fixes to the test harness, resolved some real issues, fixed some code that would cause the backdoor to not work in some environments, and then made another release. Crucially, the tarball that was in the release did not have the same code as the git repository - which is NOT NORMAL. This tarball contained the activation code to inject the backdoor into the source at build time. If you installed from the git source, then this activation was not present.

Phase 3: Jia and someone else then embarked on a campaign to influence other distro maintainers to (a) upgrade to the newest release of xz, (b) only use the release archive (tarball), and (c) only open any bugs or concerns with him [directly instead of publicly][8]. Jia did this to Fedora, but [someone else][9] attempted to do this for Debian.

[8]: https://news.ycombinator.com/item?id=39866275
[9]: https://news.ycombinator.com/item?id=39868390

Phase 4: Jia and possibly others - in parallel efforts to Phases 1, 2, and 3 - had submitted PRs to various projects that could have potentially detected the backdoor. One being [xz to disable sandboxing][10] (notice the lone period "+." in CMakeLists - it would fail the compile test). Another being [oss-fuzz][11], where it looks like the author helped the maintainer of oss-fuzz unknowingly disable a check that was preventing the liblzma exploit from working in that test environment. The question being: could this PR have prevented discovery of this issue? It also looks like they attempted to release new versions for [MinGW, Arch Linux, and Alpine][12].

[10]: https://git.tukaani.org/?p=xz.git;a=commitdiff;h=328c52da8a2bbb81307644efdb58db2c422d9ba7
[11]: https://github.com/google/oss-fuzz/pull/10667
[12]: https://news.ycombinator.com/item?id=39870098

## Technical workings

### Payload extraction

When using the release tarball, the backdoor was injected into the code via the build process. The author obfuscated the code inside a compressed test file, which was extracted, uncompressed, and passed to a shell to be run directly. Another payload was also extracted from parts of the test file, de-mangled from "bad compressed data" into "good compressed data", then uncompressed and saved as a payload source to be compiled along with the original source.
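
To make that a bit more concrete, here is a small, purely illustrative C sketch of the idea: take bytes that look like corrupted test data, apply a byte substitution to turn them back into a valid .xz stream, and decompress them with liblzma. The real attack did this with obfuscated shell and awk during the build (and with a different transform), so the substitution table, buffer sizes, and I/O below are all made up.

```c
/* Illustrative only: "de-mangle" bad-looking bytes into a valid .xz
 * stream, then decompress it with liblzma. The real backdoor did the
 * equivalent with obfuscated shell/awk at build time and piped the
 * result to a shell; the transform below is a stand-in.
 * Build with: cc demangle.c -llzma */
#include <lzma.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical de-mangling step: a trivial byte swap standing in for
 * the tr-style remapping used by the malicious build script. */
static void demangle(uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == 0x09)      buf[i] = 0x20;
        else if (buf[i] == 0x20) buf[i] = 0x09;
    }
}

int main(void) {
    static uint8_t in[1 << 16];   /* "bad" test-file bytes read from stdin */
    static uint8_t out[1 << 20];
    size_t in_size = fread(in, 1, sizeof(in), stdin);

    demangle(in, in_size);        /* now (hopefully) a valid .xz stream */

    uint64_t memlimit = UINT64_MAX;
    size_t in_pos = 0, out_pos = 0;

    /* Single-call liblzma decode of the repaired stream. */
    lzma_ret rc = lzma_stream_buffer_decode(&memlimit, 0, NULL,
                                            in, &in_pos, in_size,
                                            out, &out_pos, sizeof(out));
    if (rc != LZMA_OK) {
        fprintf(stderr, "decode failed: %d\n", (int)rc);
        return 1;
    }
    fwrite(out, 1, out_pos, stdout);  /* the recovered script/source */
    return 0;
}
```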

This payload was only extracted and built on:

1) x86_64 architectures,
2) Debian or RPM based systems, and
3) distros detected to be using systemd

### Payload execution

The bulk of this section is taken from the [oss-security thread][6] and [Sam James's FAQ][4] above.

sshd utilizes systemd's notification mechanism (sd_notify) to notify systemd that it is ready for connections. So sshd pulls in libsystemd, which pulls in liblzma.
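
For context, here is a minimal sketch (not sshd's actual patch) of a daemon calling libsystemd's sd_notify(); this single readiness call is why libsystemd - and transitively liblzma - ends up mapped into sshd's address space on affected distros.

```c
/* Minimal sketch of the systemd readiness notification that distro
 * sshd patches add. Linking against libsystemd for this one call is
 * what pulls liblzma into the process.
 * Build with: cc notify.c $(pkg-config --cflags --libs libsystemd) */
#include <systemd/sd-daemon.h>
#include <stdio.h>

int main(void) {
    /* ... a real daemon would bind its sockets and finish setup here ... */

    /* Tell systemd (via $NOTIFY_SOCKET) that the service is ready. */
    if (sd_notify(0, "READY=1") < 0)
        fprintf(stderr, "sd_notify failed\n");

    /* ... main service loop would run here ... */
    return 0;
}
```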

When the liblzma library is first loaded, it performs checks to see if it should be enabled (a rough sketch of these checks follows the list below):

1) not running in a terminal (TERM is blank)
2) it's executed as /usr/sbin/sshd
3) LD_DEBUG and LD_PROFILE are blank
4) LANG is set
5) not running in a debugger (mostly works)
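
Here is a rough, hypothetical sketch in C of what those checks amount to. The real payload does them in obfuscated code with its own string table, so the helper name and the use of glibc's program_invocation_name here are just illustrative.

```c
/* Illustrative only: roughly what the "should I activate?" checks look
 * like. The real payload does this in obfuscated code; the names and
 * ordering here are approximations. */
#define _GNU_SOURCE
#include <errno.h>    /* program_invocation_name (glibc) */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static bool should_activate(void) {
    /* 1) not running in a terminal: TERM must be unset */
    if (getenv("TERM") != NULL)
        return false;
    /* 2) the process must have been started as /usr/sbin/sshd (argv[0]) */
    if (strcmp(program_invocation_name, "/usr/sbin/sshd") != 0)
        return false;
    /* 3) dynamic-linker debugging/profiling must be off */
    if (getenv("LD_DEBUG") != NULL || getenv("LD_PROFILE") != NULL)
        return false;
    /* 4) LANG must be set */
    if (getenv("LANG") == NULL)
        return false;
    /* 5) anti-debugger checks would go here (omitted) */
    return true;
}

int main(void) {
    printf("would activate: %s\n", should_activate() ? "yes" : "no");
    return 0;
}
```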

If these pass, then the payload scans symbol tables in memory, adds an audit hook to the dynamic linker to scan additional symbols that haven't loaded yet, and once it detects certain function symbols from the sshd executable, it injects a hook for those functions to run its own payload first. This is implemented as a "store and forward", aka: saving the old function pointer, updating the pointer to point at itself, and calling the saved function when its payload completes.
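
A tiny, hypothetical C sketch of the store and forward idea: save the original function pointer, install a wrapper, run the payload first, then forward to the saved original. The real backdoor locates and rewrites sshd/OpenSSL entry points (reportedly including RSA_public_decrypt) via the audit hook; the verify_fn type and function names below are invented for illustration.

```c
/* Illustrative "store and forward" hook: save the original function
 * pointer, point callers at a wrapper, run the payload first, then
 * forward to the saved original. The type and names below are made up. */
#include <stdio.h>

typedef int (*verify_fn)(const char *key, const char *sig);

/* Stand-in for the real routine resolved inside sshd/OpenSSL. */
static int real_verify(const char *key, const char *sig) {
    (void)key; (void)sig;
    return 0;  /* "verification failed" in this toy example */
}

static verify_fn saved_verify;  /* the "store": original pointer */

static int hooked_verify(const char *key, const char *sig) {
    /* Payload runs first: e.g. check for the attacker's magic key. */
    if (0 /* attacker_key_matches(key, sig) -- hypothetical */)
        return 1;  /* bypass: pretend verification succeeded */
    /* The "forward": call the saved original function. */
    return saved_verify(key, sig);
}

int main(void) {
    /* Stands in for the function pointer the payload patches in sshd. */
    verify_fn entry = real_verify;

    /* Install the hook: save the old value, then overwrite it. */
    saved_verify = entry;
    entry = hooked_verify;

    printf("verify returned %d\n", entry("some-key", "some-sig"));
    return 0;
}
```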

Once it has injected itself as a store and forward, any incoming ssh connection to the sshd server is intercepted via the payload during the authorization phase of the connection, and the best guess at this point is that the payload is designed to let a specific set of SSH private keys (or fingerprints of them) log into the system by bypassing the normal authorization process.

One thing that Sam James makes clear that I agree with:

> We know that the combination of systemd and patched openssh are vulnerable but pending further analysis of the payload, we cannot be certain that other configurations aren't.

## Opinions and Aftermath

This did not affect any production versions of any Linux distros, but this type of attack is terrifying.

### Broken Supply Chain

This makes me question everything about the supply chain. It will also add a bigger burden to folks like Red Hat (Disclaimer: I work at Red Hat) and other distributions. They will need to perform more extensive due diligence for any upstream code incorporated into their distributions. Again, this did not affect any production versions of any distros, but it raises some questions.

* "How did this happen?"
* "How can we prevent this?"
* "How can we detect this?"

It also raises a question about

* "Do we know who the maintainers are?"

@JiaT75 used his private GPG key to sign commits and the release binaries. Has anyone met him? Can we be sure that he is who he says he is? What about the folks that know him and signed his key: do they trust that the holder of that private key is him? Do others that signed the signers' keys trust that those folks are who they say they are? Rinse and repeat. Trust networks are complicated things and very difficult to build organically, especially in this era of remote work and faceless communication.

### How did this happen?

Lasse Collin has been the sole maintainer of xz-utils for 15 years. In 2022, [he mentioned][13] he was [struggling][14] to keep up with demand and had long-term mental struggles. I understand this far too well. Was this an "attack of opportunity" or was it "state sponsored?" I'm not sure anyone knows, yet. The complexity of the attack, coupled with the long-term infiltration, the targeted architectures, the VPNs used, and the [patch timelines][15], suggests multiple people in multiple timezones.

[13]: https://www.mail-archive.com/xz-devel@tukaani.org/msg00567.html
[14]: https://news.ycombinator.com/item?id=39873552
[15]: https://twitter.com/birchb0y/status/1773871381890924872

### How can we prevent this?

I'm not really sure how to prevent this.

There are two main issues:

1) the technical problems
2) the people problems

#### Technical problems

At a minimum, upstreams should be running heuristics on code to identify anomalies. Things like [oss-fuzz][16] are a great start. MITRE also goes through several scenarios and mitigation techniques for those scenarios; relevant examples being [Trusted Supply Chain][17] and [Trusted Relationship][18] attacks. There is also GPG key [trust and signing][19], [building the web of trust][20], and [more here][21]. In some circles, we've grown lax around this process, and we shouldn't.

[16]: https://github.com/google/oss-fuzz
[17]: https://attack.mitre.org/techniques/T1195/
[18]: https://attack.mitre.org/techniques/T1199/
[19]: https://www.gnupg.org/gph/en/manual/x334.html
[20]: https://www.gnupg.org/gph/en/manual/x547.html
[21]: https://www.phildev.net/pgp/gpgtrust.html

#### The People Problems

Folks get burned out. They have trauma. They have debt. They have relationship problems. They fall madly in love and drop everything else. Their pets pass away. They lose a parent or they suddenly find they need to care for one. They have a baby or start a family. They can pass silently in their sleep - like a friend of mine, leaving behind a legacy of an open source karaoke project.

Point is: humans are complex and life can be fragile. Lots of things can change the motivation and passion for something into a burden or into something that just feels insignificant. We should contribute to projects, either time or money, but what about those projects that are run by a lone wolf?

I don't want corporate entities to suddenly take interest in these projects and take them over, but at the same time, we need stronger communities in these spaces. We should be identifying these at-risk projects, and then finding ways to help.

This incident brings to mind the [XKCD comic][22].

[22]: https://xkcd.com/2347/

# DISCLAIMER

I'm a Red Hat employee, but this post/document is my own personal words, feelings, and research into this matter. It does not represent Red Hat, nor does it represent any of my colleagues or associates.

--b